Search for: All records

Creators/Authors contains: "Patel, Hershel"

  1. Goel, S (Ed.)
    Federated Learning (FL), an emerging decentralized Machine Learning (ML) approach, offers a promising avenue for training models on distributed data while safeguarding individual privacy. Nevertheless, when implemented in real ML applications, adversarial attacks that aim to degrade the quality of the local training data and compromise the performance of the resulting model still remain a challenge. In this paper, we propose and develop an approach that integrates Reputation and Trust techniques into conventional FL. These techniques introduce a novel pre-processing step for the local models, performed before the aggregation procedure, in which we cluster the local model updates in their parameter space and use the clustering results to evaluate trust towards each local client. The trust value is updated in each aggregation round and takes into account retrospective evaluations from previous rounds, so the history of updates makes the assessment more informative and reliable. Through an empirical study on a traffic-sign classification computer vision application, we verify that our approach identifies local clients that are compromised by adversarial attacks and submit updates detrimental to FL performance. The local updates provided by non-trusted clients are excluded from aggregation, which enhances the security of FL and its robustness to models that might otherwise be trained on corrupted data.
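
The following is a minimal, illustrative Python sketch of the kind of trust-based pre-processing the abstract describes: client updates are clustered in parameter space, each client's trust is blended with its history from previous rounds, and only trusted updates are averaged. The class name TrustAggregator, the choice of KMeans with two clusters, the assumption that the larger cluster is benign, and all thresholds and weights are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.cluster import KMeans

class TrustAggregator:
    """Reputation/trust-based filtering of client updates before FedAvg-style aggregation.

    Illustrative sketch only; parameter values and clustering choices are assumptions.
    """

    def __init__(self, num_clients, trust_threshold=0.5, history_weight=0.7):
        self.trust = np.full(num_clients, 0.5)   # neutral initial trust for every client
        self.threshold = trust_threshold         # clients below this trust are excluded
        self.history_weight = history_weight     # weight of past rounds vs. the current round

    def aggregate(self, updates):
        """updates: dict {client_id: 1-D numpy array of flattened model parameters}."""
        ids = list(updates)
        X = np.stack([updates[cid] for cid in ids])

        # Cluster the updates in parameter space; assume the larger cluster is benign
        # (i.e., honest clients form the majority in this round).
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
        benign_label = np.bincount(labels).argmax()
        round_score = (labels == benign_label).astype(float)

        # Retrospective trust update: blend this round's evidence with each client's history.
        for k, cid in enumerate(ids):
            self.trust[cid] = (self.history_weight * self.trust[cid]
                               + (1.0 - self.history_weight) * round_score[k])

        # Exclude non-trusted clients and average the remaining updates (plain FedAvg).
        trusted = [k for k, cid in enumerate(ids) if self.trust[cid] >= self.threshold]
        if not trusted:                           # degenerate case: keep all updates
            trusted = list(range(len(ids)))
        return X[trusted].mean(axis=0)

In each round, the server would call aggregate() with the flattened parameter vectors received from the participating clients and load the returned average into the global model; clients whose trust has decayed below the threshold contribute nothing to that round's model.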